Structural Risk Minimization and Rademacher Complexity for Regression

Authors

  • Davide Anguita
  • Alessandro Ghio
  • Luca Oneto
  • Sandro Ridella
Abstract

The Structural Risk Minimization principle allows estimating the generalization ability of a learned hypothesis by measuring the complexity of the entire hypothesis class. Two of the most recent and effective complexity measures are the Rademacher Complexity and the Maximal Discrepancy, which have been applied to the derivation of generalization bounds for kernel classifiers. In this work, we extend their application to the regression framework.
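As a rough illustration of how such a data-dependent complexity term can be computed, the sketch below estimates the empirical Rademacher complexity of a kernel hypothesis class by Monte Carlo sampling of random sign vectors. The RKHS-ball class of radius B, the Gaussian kernel, and all function names are assumptions made for this example only; the paper's actual regression bounds are not reproduced here.

    import numpy as np

    def rbf_gram(X, gamma=1.0):
        # Gram matrix of a Gaussian (RBF) kernel on the sample X
        sq = np.sum(X ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
        return np.exp(-gamma * d2)

    def empirical_rademacher_rkhs_ball(K, B=1.0, n_draws=1000, seed=0):
        # Monte Carlo estimate of E_sigma[ sup_{||f|| <= B} (1/n) sum_i sigma_i f(x_i) ].
        # For a ball of radius B in the RKHS the inner supremum has the closed
        # form (B/n) * sqrt(sigma' K sigma), so only the sign vectors are sampled.
        rng = np.random.default_rng(seed)
        n = K.shape[0]
        draws = [B / n * np.sqrt(sigma @ K @ sigma)
                 for sigma in rng.choice([-1.0, 1.0], size=(n_draws, n))]
        return float(np.mean(draws))

    # toy usage on 50 random points in 3 dimensions
    X = np.random.default_rng(0).normal(size=(50, 3))
    print(empirical_rademacher_rkhs_ball(rbf_gram(X, gamma=0.5), B=1.0))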


Similar resources

Rademacher penalties and structural risk minimization

We suggest a penalty function to be used in various problems of structural risk minimization. This penalty is data dependent and is based on the sup-norm of the so-called Rademacher process indexed by the underlying class of functions (sets). The standard complexity penalties, used in learning problems and based on the VC dimensions of the classes, are conservative upper bounds (in a probabilist...
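For reference, the data-dependent quantity behind this penalty is, in its standard form (the paper's exact penalty may differ by constants or normalization), the empirical Rademacher complexity of the class, i.e. the sup-norm of the Rademacher process indexed by F evaluated on the observed sample:

    \hat{R}_n(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}} \Big|\frac{1}{n}\sum_{i=1}^{n} \sigma_i f(X_i)\Big|\,\right],
    \qquad \sigma_1,\dots,\sigma_n \ \text{i.i.d. with } \Pr(\sigma_i = \pm 1) = \tfrac{1}{2},

where the signs are drawn independently of the sample.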


A Risk Minimization Principle for a Class of Parzen Estimators

This paper explores the use of a Maximal Average Margin (MAM) optimality principle for the design of learning algorithms. It is shown that the application of this risk minimization principle results in a class of (computationally) simple learning machines similar to the classical Parzen window classifier. A direct relation with the Rademacher complexities is established, as such facilitating a...
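For context, the classical Parzen window classifier that the derived machines resemble can be written in a few lines. This is only the textbook rule, with a hypothetical rbf_kernel helper and a Gaussian kernel assumed for illustration; it is not the MAM-based learning machine proposed in the paper.

    import numpy as np

    def rbf_kernel(X, Z, gamma=1.0):
        # cross Gram matrix between test points X and training points Z
        d2 = (np.sum(X ** 2, axis=1)[:, None]
              + np.sum(Z ** 2, axis=1)[None, :] - 2.0 * X @ Z.T)
        return np.exp(-gamma * d2)

    def parzen_window_predict(X_train, y_train, X_test, gamma=1.0):
        # average kernel similarity to each class, then decide by the sign
        K = rbf_kernel(X_test, X_train, gamma)
        pos = K[:, y_train == +1].mean(axis=1)
        neg = K[:, y_train == -1].mean(axis=1)
        return np.sign(pos - neg)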


A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity

We present a novel notion of complexity that interpolates between and generalizes some classic existing complexity notions in learning theory: for estimators like empirical risk minimization (ERM) with arbitrary bounded losses, it is upper bounded in terms of data-independent Rademacher complexity; for generalized Bayesian estimators, it is upper bounded by the data-dependent information comple...
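As background for the Rademacher side of this comparison, the classical excess-risk bound for ERM with a loss bounded in [0, 1] reads roughly as follows (constants vary across references; this is a reminder of the standard result, not the bound proved in the paper): with probability at least 1 − δ,

    P\ell_{\hat{f}_n} - \inf_{f \in \mathcal{F}} P\ell_f \;\le\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{F}) + 2\sqrt{\frac{\ln(2/\delta)}{2n}},

where \mathfrak{R}_n(\ell \circ \mathcal{F}) is the Rademacher complexity of the loss class.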


Structural Return Maximization for Reinforcement Learning

Batch Reinforcement Learning (RL) algorithms attempt to choose a policy from a designer-provided class of policies given a fixed set of training data. Choosing the policy which maximizes an estimate of return often leads to over-fitting when only limited data is available, due to the size of the policy class in relation to the amount of data available. In this work, we focus on learning policy ...


Medallion Lecture Local Rademacher Complexities and Oracle Inequalities in Risk Minimization

Let F be a class of measurable functions f : S → [0, 1] defined on a probability space (S, A, P). Given a sample (X1, ..., Xn) of i.i.d. random variables taking values in S with common distribution P, let Pn denote the empirical measure based on (X1, ..., Xn). We study an empirical risk minimization problem Pn f → min, f ∈ F. Given a solution f̂n of this problem, the goal is to obtain very ge...
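In cleaner display notation, the empirical risk minimizer and the excess risk that such oracle inequalities control are (restating the setup above, with P f denoting the expectation of f under P):

    \hat{f}_n \in \operatorname*{arg\,min}_{f \in \mathcal{F}} P_n f,
    \qquad P_n f = \frac{1}{n}\sum_{i=1}^{n} f(X_i),
    \qquad \mathcal{E}(\hat{f}_n) = P\hat{f}_n - \inf_{f \in \mathcal{F}} P f.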



Journal:

Volume   Issue

Pages  -

Publication date: 2012